Algorithms are increasingly used by governments and businesses to make 'automated', data-driven decisions which can have far-reaching consequences, with little transparency, scrutiny or accountability. Although algorithms' superficial objectivity may appear to remove the biases of human decision-making, algorithms always reflect the assumptions of those who designed them and the patterns in the data on which they are trained.
Without intervention and oversight, the natural tendency of data-driven technologies is to replicate past patterns of structural inequality encoded in data, and to project them into the future.
It is vital that policymakers understand this.
To avoid this, those who use algorithms to make decisions which affect people’s lives – whether to inform exam results, or aid decisions around hiring, pay and promotions – must take active and deliberate steps to ensure algorithms promote equality rather than entrench inequality.
Our analysis shows that the current legal framework and regulatory regime have not kept pace with what technology is capable of, or with how it is routinely deployed. In particular, existing equality and data protection legislation is insufficient to provide protection and redress for those placed at a disadvantage by assumptions baked into algorithms.
Our cross-disciplinary Equality Task Force, chaired by Helen Mountfield QC, makes a series of recommendations, based around proposed new legislation, to ensure that algorithms are used in a fair and transparent way, and that people are properly accountable for decisions about their design and use.
Our focus is the future of work, but our analysis and recommendations may inform a wider debate about the regulation of AI.